Akoumparouli/nemo ux precision plugin refactor #10129
Merged: akoumpa merged 16 commits into main from akoumparouli/nemo_ux_precision_plugin_refactor on Aug 20, 2024
Conversation
akoumpa force-pushed the akoumparouli/nemo_ux_precision_plugin_refactor branch 3 times, most recently from eacd5c0 to a93319e on August 13, 2024 at 20:57.
akoumpa force-pushed the akoumparouli/nemo_ux_precision_plugin_refactor branch 3 times, most recently from 715a3db to 1d92ddb on August 13, 2024 at 21:58.
akoumpa force-pushed the akoumparouli/nemo_ux_precision_plugin_refactor branch 5 times, most recently from c00d64a to 88126e3 on August 13, 2024 at 23:27.
ShriyaPalsamudram requested review from marcromeyn and removed the request for marcromeyn on August 14, 2024 at 14:24.
akoumpa force-pushed the akoumparouli/nemo_ux_precision_plugin_refactor branch 5 times, most recently from 830db7e to 7700ae6 on August 15, 2024 at 02:16.
ShriyaPalsamudram previously approved these changes on Aug 15, 2024.
marcromeyn reviewed on Aug 15, 2024 (three reviews).
Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
akoumpa force-pushed the akoumparouli/nemo_ux_precision_plugin_refactor branch from 4dd9ba8 to d5cf9f9 on August 20, 2024 at 16:44.
ShriyaPalsamudram approved these changes on Aug 20, 2024.
Dido0o0 pushed a commit to Dido0o0/NeMo that referenced this pull request on Aug 23, 2024:

* fix dropout
* fix gemma embedding
* more config matching
* config matching
* Apply isort and black reformatting
* llama3 rotary base
* remove persist_layer_norm
* remove dtype configs as they're handled in NVIDIA#10129
* gemma embedding scaling without model transform
* Apply isort and black reformatting
* remove superfluous import

Signed-off-by: Chen Cui <chcui@nvidia.com>
Signed-off-by: cuichenx <cuichenx@users.noreply.github.com>
Co-authored-by: cuichenx <cuichenx@users.noreply.github.com>
Dido0o0 pushed a commit to Dido0o0/NeMo that referenced this pull request on Aug 23, 2024:

* rename mixed_precision.py to precision.py
* replace print with logging.warning
* Apply isort and black reformatting
* also patch ddp_config
* Rename patch_dtype_config to update_config_with_dtype_overrides
* Add GradScaler's args to constructor's arg list
* Apply isort and black reformatting
* fix import
* Leverage mcore's fp16 grad scaler
* remove unused param
* Add precision plugin test
* Apply isort and black reformatting
* Also update __io__ configs
* remove unused imports
* fix fabric to ptl converter mcore precision plugin
* fix test

Signed-off-by: Alexandros Koumparoulis <akoumparouli@nvidia.com>
Signed-off-by: akoumpa <akoumpa@users.noreply.github.com>
Co-authored-by: akoumpa <akoumpa@users.noreply.github.com>
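The commit list mentions renaming patch_dtype_config to update_config_with_dtype_overrides, i.e. copying user-requested dtype settings onto the model and DDP configs. A minimal illustrative sketch of that idea follows; the config class and field names here are assumptions for demonstration, not the PR's actual code.

```python
from dataclasses import dataclass, fields

# Hypothetical stand-in for a Megatron-style model/ddp config.
@dataclass
class ModelConfig:
    params_dtype: str = "fp32"
    pipeline_dtype: str = "fp32"
    autocast_enabled: bool = False

def update_config_with_dtype_overrides(dtype_overrides: dict, config):
    """Copy any matching dtype-related fields from the overrides onto config."""
    for f in fields(config):
        if f.name in dtype_overrides:
            setattr(config, f.name, dtype_overrides[f.name])
    return config

cfg = update_config_with_dtype_overrides(
    {"params_dtype": "bf16", "pipeline_dtype": "bf16"}, ModelConfig()
)
print(cfg.params_dtype)  # -> bf16
```

Fields not named in the overrides (here, autocast_enabled) keep their defaults, so the same helper can be applied to both the model config and the ddp_config, as the commit "also patch ddp_config" suggests.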
adityavavre pushed a commit to adityavavre/NeMo that referenced this pull request on Sep 15, 2024.
adityavavre pushed a commit to adityavavre/NeMo that referenced this pull request on Sep 15, 2024.
monica-sekoyan pushed a commit that referenced this pull request on Oct 14, 2024.
monica-sekoyan pushed a commit that referenced this pull request on Oct 14, 2024.
What does this PR do?
Add a one line overview of what this PR aims to accomplish.
Collection: [Note which collection this PR will affect]
Changelog
Usage
# Add a code snippet demonstrating how to use this
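The usage slot above was left as a template placeholder. Based on the commit "Add GradScaler's args to constructor's arg list", a hedged sketch of the constructor shape it describes might look like the following; the class name and parameter names are illustrative assumptions, not NeMo's exact API.

```python
# Illustrative sketch only: a precision plugin whose constructor surfaces
# GradScaler knobs directly, mirroring this PR's commit message.
class MixedPrecisionPluginSketch:
    def __init__(self, precision="bf16-mixed",
                 initial_scale=2**16, growth_interval=1000, hysteresis=2):
        self.precision = precision
        # GradScaler arguments exposed on the plugin constructor
        self.initial_scale = initial_scale
        self.growth_interval = growth_interval
        self.hysteresis = hysteresis

plugin = MixedPrecisionPluginSketch(precision="16-mixed", initial_scale=2**12)
print(plugin.precision, plugin.initial_scale)  # -> 16-mixed 4096
```

Exposing the scaler arguments on the constructor lets callers tune loss scaling without constructing a GradScaler themselves; per the commit list, the actual implementation leverages mcore's fp16 grad scaler.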
GitHub Actions CI
The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.
The GitHub Actions CI will run automatically when the "Run CICD" label is added to the PR.
To re-run CI remove and add the label again.
To run CI on an untrusted fork, a NeMo user with write access must first click "Approve and run".
Before your PR is "Ready for review"
Pre checks:
PR Type:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Who can review?
Anyone in the community is free to review the PR once the checks have passed.
The contributor guidelines list specific people who can review PRs to various areas.
Additional Information